Scalable Service Differentiation in a Shared Storage Cache
Authors
Abstract
Motivated by the need to enable easier data sharing and to curb rising storage management costs, storage systems are becoming increasingly consolidated and are therefore shared by many users and applications. In such environments, service differentiation becomes increasingly important. Since caching is a fundamental and pervasive technique for improving storage system performance, providing differentiated services from a storage cache is a crucial component of any end-to-end QoS solution. In this paper, we present a QoS architecture for a shared storage proxy cache that provides long-term hit-rate assurances to competing classes. The architecture consists of three components: (a) per-class feedback controllers that track the performance of each class, (b) a fairness controller that allocates excess resources fairly when all goals are met, and (c) a contention resolver that decides cache allocation when at least one class misses its target hit rate. We compare the performance of several per-class feedback controllers and provide guidelines for designing QoS mechanisms for such a dynamic environment.
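The three components named in the abstract (per-class feedback controllers, a fairness controller, and a contention resolver) can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the class names, the proportional gain, and the rebalancing policy are hypothetical, not the paper's actual algorithms.

```python
# Illustrative sketch of feedback-driven cache partitioning with hit-rate
# goals. All names, gains, and policies are assumptions for exposition.

class ClassState:
    """Bookkeeping for one workload class sharing the cache."""
    def __init__(self, target_hit_rate, share):
        self.target = target_hit_rate  # long-term hit-rate goal
        self.share = share             # fraction of the cache allocated
        self.measured = 0.0            # hit rate observed in the last epoch

def feedback_step(classes, gain=0.1):
    """Per-class proportional controllers: nudge each class's allocation
    in the direction that closes its hit-rate error."""
    for c in classes:
        error = c.target - c.measured
        c.share = max(0.0, c.share + gain * error)

def rebalance(classes):
    """Fairness controller / contention resolver: if demands exceed the
    cache while some classes still meet their goals, reclaim the excess
    from the satisfied classes; otherwise split the cache proportionally."""
    total = sum(c.share for c in classes)
    missing = [c for c in classes if c.measured < c.target]
    satisfied = [c for c in classes if c.measured >= c.target]
    if total > 1.0 and missing and satisfied:
        # Contention: shrink only the satisfied classes, proportionally.
        excess = total - 1.0
        sat_total = sum(c.share for c in satisfied)
        for c in satisfied:
            c.share -= excess * (c.share / sat_total)
    elif total > 0:
        # All goals met (or all missed): normalize shares to sum to 1.
        for c in classes:
            c.share /= total
```

Each epoch, the measured hit rates feed the per-class controllers, and the rebalancing step arbitrates when the requested shares exceed the cache; the real system would additionally smooth measurements over a long horizon to honor the long-term nature of the assurances.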
Similar Resources
Dynamic partitioning of the cache hierarchy in shared data centers
Due to the imperative need to reduce the management costs of large data centers, operators multiplex several concurrent database applications on a server farm connected to shared network-attached storage. Determining and enforcing per-application resource quotas in the resulting cache hierarchy, on the fly, poses a complex resource allocation problem spanning the database server and the storage ...
The Vantage Cache-Partitioning Technique Enables Configurability and Quality-of-Service Guarantees in Large-Scale Chip Multiprocessors with Shared Caches: Caches Can Have Hundreds of Partitions with Sizes Specified at Cache-Line Granularity, While Maintaining High Associativity and Strict Isolation Among Partitions
Shared caches are pervasive in chip multiprocessors (CMPs). In particular, CMPs almost always feature a large, fully shared last-level cache (LLC) to mitigate the high latency, high energy, and limited bandwidth of main memory. A shared LLC has several advantages over multiple, private LLCs: it increases cache utilization, accelerates intercore communication (which happens through the cac...
Automatic Parallelization for Non-cache Coherent Multiprocessors
Although much work has been done on parallelizing compilers for cache-coherent shared-memory multiprocessors and message-passing multiprocessors, there is relatively little research on parallelizing compilers for non-cache-coherent multiprocessors with a global address space. In this paper, we present a preliminary study on automatic parallelization for the Cray T3D, a commercial scalable machine ...
Distributed Volume Rendering for Scalable High-resolution Display Arrays
This work presents a distributed image-order volume rendering approach for scalable high-resolution displays. This approach preprocesses data into a conventional hierarchical structure which is distributed across the local storage of a distributed-memory cluster. The cluster is equipped with graphics cards capable of hardware accelerated texture rendering. The novel contribution of this work is...
Service Differentiation in Web Caching and Content Distribution
Service differentiation in web caching and content distribution will result in significant technical and economic efficiency gains, to the benefit of both content publishers and service providers. Through preferential storage allocation and coordinated transitioning of objects across priority queues, we demonstrate a QoS caching scheme that achieves quantifiable service differentiation with lit...